
Statistical validity

Characteristic Name: Statistical validity
Dimension: Validity
Description: Computed data must be statistically valid
Granularity: Information object
Implementation Type: Process-based approach
Characteristic Type: Usage

Verification Metric:

The number of tasks that failed or underperformed due to a lack of statistical validity in the data
The number of complaints received due to a lack of statistical validity in the data


The implementation guidelines are practices to follow with regard to the characteristic; each scenario is an example of the corresponding guideline in practice.

Guideline: Establish the population of interest unambiguously, with appropriate justification (maintain documentation).
Scenario: Both credit customers and cash customers are considered in a survey on customer satisfaction.

Guideline: Establish an appropriate sampling method, with appropriate justification.
Scenario: Stratified sampling is used to investigate the drug preferences of medical officers.

Guideline: Establish the statistical validity of samples, avoiding over-coverage and under-coverage (maintain documentation).
Scenario: Samples are taken from all income levels in a survey on vaccination.

Guideline: Maintain the consistency of samples when longitudinal analysis is performed (maintain documentation).
Scenario: The same population is used over time to collect epidemic data for a longitudinal analysis.

Guideline: Ensure that valid statistical methods are used to enable valid inferences about the data, valid comparisons of parameters, and generalisation of the findings.
Scenario: A Poisson distribution is used to make inferences, since the data-generating events occur within a fixed interval of time and/or space.

Guideline: Ensure that acceptable variations for estimated parameters are established with appropriate justification (see the sketch after this list).
Scenario: A 95% confidence interval is used in estimating the mean value.

Guideline: Ensure that appropriate imputation measures are taken to nullify the impact of problems relating to outliers and data collection procedures, and that the edit rules are defined and maintained.
Scenario: Incomplete responses are removed from the final data sample.
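
As a concrete illustration of the confidence-interval scenario, the following is a minimal Python sketch, not taken from the sources: the sample values are invented, and it assumes NumPy and SciPy are available.

```python
# Minimal sketch: a 95% confidence interval for a sample mean.
# The sample values are illustrative, not from the source.
import numpy as np
from scipy import stats

sample = np.array([4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.4, 4.8])

mean = sample.mean()
# t-based interval: confidence level, degrees of freedom,
# point estimate (loc), and standard error of the mean (scale).
ci_low, ci_high = stats.t.interval(
    0.95,
    len(sample) - 1,
    loc=mean,
    scale=stats.sem(sample),
)
print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```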

Validation Metric:

How mature is the process to maintain the statistical validity of data?

These are examples of how the characteristic might occur in a database.

Example: If a column should contain at least one occurrence of all 50 states, but the column contains only 43 states, then the population is incomplete.
Source: Y. Lee, et al., "Journey to Data Quality", Massachusetts Institute of Technology, 2006.
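
A check of this kind can be automated. The sketch below is a hypothetical illustration (the column contents and the truncated list of state codes are invented), not an implementation from the cited source.

```python
# Minimal sketch: flag an incomplete population by comparing the
# observed values in a column against the expected domain.
EXPECTED_STATES = {
    "AL", "AK", "AZ", "AR", "CA", "CO", "CT",  # ...all 50 codes in practice
}

column_values = ["CA", "CA", "AZ", "CT", "AL"]  # illustrative column contents

missing = EXPECTED_STATES - set(column_values)
if missing:
    print(f"Population incomplete: {len(missing)} expected value(s) absent")
```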

The definitions below show how the characteristic is described in the sources provided.

Definition: Coherence of data refers to the internal consistency of the data. Coherence can be evaluated by determining if there is coherence between different data items for the same point in time, coherence between the same data items for different points in time, or coherence between organisations or internationally. Coherence is promoted through the use of standard data concepts, classifications and target populations.
Source: HIQA 2011. International Review of Data Quality. Health Information and Quality Authority (HIQA), Ireland. http://www.hiqa.ie/press-release/2011-04-28-international-review-data-quality.

Definition: 1) Accuracy in the general statistical sense denotes the closeness of computations or estimates to the exact or true values. 2) Coherence of statistics is their adequacy to be reliably combined in different ways and for various uses.
Source: LYON, M. 2008. Assessing Data Quality, Monetary and Financial Statistics. Bank of England. http://www.bankofengland.co.uk/statistics/Documents/ms/articles/art1mar08.pdf.


Referential integrity

Characteristic Name: Referential integrity
Dimension: Consistency
Description: Data relationships are represented through referential integrity rules
Granularity: Record
Implementation Type: Rule-based approach
Characteristic Type: Declarative

Verification Metric:

The number of referential integrity violations per thousand records


The implementation guidelines are practices to follow with regard to the characteristic; each scenario is an example of the corresponding guideline in practice.

Guideline: Implement and maintain foreign keys across tables (data sets).
Scenario: Implementation of foreign keys.

Guideline: Implement proper validation rules, or automated suggestions of values based on popular value combinations, to prevent incorrect foreign key references (see the sketch after this list).
Scenario: The attribute Customer_Zip_Code of the Customer relation contains the value 4415 instead of 4445; both zip codes exist in the Zip_Code relation.

Guideline: Implement validation rules for the foreign keys of the relevant tables in the case of data migrations.
Scenario: Error logs are generated for foreign key violations.

Guideline: Implement proper synchronisation mechanisms to handle data updates when there are concurrent operations or distributed databases.
Scenario: Locking mechanisms are applied to data objects while they are being updated.

Guideline: Ensure the consistency of the data model when changes are made to the process model (software).
Scenario: The data dictionary provides the functional dependencies (FDs) and conditional functional dependencies (CFDs).
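
To connect these guidelines to the verification metric above, here is a minimal Python sketch of a foreign-key check performed outside the database; the table contents and field names are illustrative assumptions, not from the sources.

```python
# Minimal sketch: count dangling foreign keys and report the
# verification metric (violations per thousand records).
zip_codes = {"4415", "4445", "4450"}   # primary keys of the Zip_Code relation

customers = [
    {"id": 1, "zip": "4445"},
    {"id": 2, "zip": "4415"},
    {"id": 3, "zip": "9999"},          # references a non-existent zip code
]

violations = [c for c in customers if c["zip"] not in zip_codes]
per_thousand = 1000 * len(violations) / len(customers)
print(f"{len(violations)} violation(s), {per_thousand:.1f} per thousand records")
```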

Validation Metric:

How mature is the creation and implementation of the DQ rules to maintain referential integrity?

These are examples of how the characteristic might occur in a database.

Example: The name of the city and the postal code should be consistent. This can be enabled by entering just the postal code and filling in the name of the city systematically through the use of referential integrity with a postal code table.
Source: Y. Lee, et al., "Journey to Data Quality", Massachusetts Institute of Technology, 2006.

Example: A company has a color field that only records red, blue, and yellow. A new requirement makes them decide to break each of these colors down into multiple shadings, and thus institute a scheme of recording up to 30 different colors, all of which are variations of red, blue, and yellow. None of the old records are updated to the new scheme, as only new records use it. This database will have inconsistency of representation of color that crosses a point in time.
Source: J. E. Olson, "Data Quality: The Accuracy Dimension", Morgan Kaufmann Publishers, 9 January 2003.
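
The first example (deriving the city from the postal code) can be sketched as a lookup against a reference table. The table contents and the fill_city function below are hypothetical illustrations, not from the cited source.

```python
# Minimal sketch: fill in the city from a postal-code reference table
# instead of accepting free-text input, keeping the two consistent.
POSTAL_CODE_TABLE = {
    "02139": "Cambridge",
    "10001": "New York",
}

def fill_city(record: dict) -> dict:
    # Derive the city from the postal code; None flags an unknown code.
    record["city"] = POSTAL_CODE_TABLE.get(record["postal_code"])
    return record

print(fill_city({"postal_code": "02139"}))
```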

The definitions below show how the characteristic is described in the sources provided.

Definition: The information float or lag time is acceptable between (a) when data is knowable (created or changed) in one data store and (b) when it is also knowable in a redundant or distributed data store, and concurrent queries to each data store produce the same result.
Source: ENGLISH, L. P. 2009. Information Quality Applied: Best Practices for Improving Business Information, Processes and Systems. Wiley Publishing.

Definition: Assigning unique identifiers to objects (customers, products, etc.) within your environment simplifies the management of your data, but introduces new expectations that any time an object identifier is used as a foreign key within a data set to refer to the core representation, that core representation actually exists.
Source: LOSHIN, D. 2006. Monitoring Data Quality Performance Using Data Quality Metrics. Informatica Corporation.

Definition: i.e. integrity rules. Data follows specified database integrity rules.
Source: PRICE, R. J. & SHANKS, G. 2005. Empirical refinement of a semiotic information quality framework. In: Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05). IEEE, 216a-216a.